

Search for: All records

Creators/Authors contains: "Yuan, Jiayi"

Note: Clicking on a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.


  1. Free, publicly-accessible full text available November 12, 2025
  2. Free, publicly-accessible full text available November 12, 2025
  3. Transfer learning leverages feature representations of deep neural networks (DNNs) pretrained on source tasks with rich data to enable effective finetuning on downstream tasks. However, pretrained models large enough to deliver generalizable representations are often prohibitively costly to deploy on edge devices with constrained resources. To close this gap, we propose a new transfer learning pipeline built on our finding that robust tickets transfer better: subnetworks drawn with properly induced adversarial robustness achieve better transferability than vanilla lottery ticket subnetworks. Extensive experiments and ablation studies validate that the proposed pipeline achieves improved accuracy-sparsity trade-offs across both diverse downstream tasks and sparsity patterns, further enriching the lottery ticket hypothesis.
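The lottery-ticket subnetworks referenced in this abstract are conventionally drawn by magnitude pruning. The sketch below illustrates that generic step only, not the paper's robust-ticket procedure; the function name and toy weights are invented for illustration.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask that keeps the largest-magnitude weights.

    `sparsity` is the fraction of weights to zero out.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)          # number of weights to remove
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))                # toy layer weights
mask = magnitude_prune_mask(w, sparsity=0.5)
ticket = w * mask                          # the pruned "ticket" subnetwork
```

In the paper's pipeline, the mask would instead be drawn from an adversarially robust pretrained model before finetuning on the downstream task.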
  4. Social ambiance describes the context in which social interactions happen and can be measured from speech audio by counting the number of concurrent speakers. This measurement has enabled various mental health tracking and human-centric IoT applications. While on-device Social Ambiance Measure (SAM) is highly desirable to ensure user privacy and thus facilitate wide adoption of the aforementioned applications, the computational complexity of state-of-the-art deep neural network (DNN)-powered SAM solutions stands at odds with the often constrained resources of mobile devices. Furthermore, only limited labeled data is available or practical for SAM under clinical settings, due to privacy constraints and the human effort required, further limiting the achievable accuracy of on-device SAM solutions. To this end, we propose a dedicated neural architecture search framework for Energy-efficient and Real-time SAM (ERSAM). Specifically, our ERSAM framework automatically searches for DNNs that push forward the achievable accuracy vs. hardware efficiency frontier of mobile SAM solutions. For example, ERSAM-delivered DNNs consume only 40 mW × 12 h of energy and 0.05 seconds of processing latency for a 5-second audio segment on a Pixel 3 phone, while achieving an error rate of only 14.3% on a social ambiance dataset generated from LibriSpeech. We expect our ERSAM framework to pave the way for the ubiquitous on-device SAM solutions that are in growing demand.
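Hardware-aware NAS of this kind scores candidate architectures against deployment budgets. A minimal sketch of the selection step, assuming a pre-measured candidate pool; all names and numbers below are hypothetical placeholders, apart from the 0.05 s latency and 40 mW × 12 h (= 480 mWh) budgets echoed from the abstract:

```python
# (name, error_rate, energy_mWh, latency_s) -- hypothetical measurements
candidates = [
    ("dnn_a", 0.158, 350.0, 0.032),
    ("dnn_b", 0.143, 480.0, 0.050),
    ("dnn_c", 0.131, 910.0, 0.120),
]

def select(cands, energy_budget_mwh, latency_budget_s):
    """Pick the most accurate candidate that fits both hardware budgets."""
    feasible = [c for c in cands
                if c[2] <= energy_budget_mwh and c[3] <= latency_budget_s]
    return min(feasible, key=lambda c: c[1]) if feasible else None

# 40 mW x 12 h = 480 mWh energy budget, 0.05 s latency budget
best = select(candidates, energy_budget_mwh=480.0, latency_budget_s=0.05)
```

The real framework searches a structured architecture space rather than a fixed pool, but the accuracy-under-budget objective is the same.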
  5. Abstract: Bacterial populations are highly adaptive. They can respond to stress and survive in shifting environments. How the behaviours of individual bacteria vary during stress, however, is poorly understood. Technologies for single-cell transcriptional profiling have been developed to identify and characterize rare bacterial subpopulations, but existing approaches are limited, for example, in the number of cells or transcripts they can profile. Due in part to these limitations, few conditions have been studied with these tools. Here we develop massively-parallel, multiplexed, microbial sequencing (M3-seq), a single-cell RNA-sequencing platform for bacteria that pairs combinatorial cell indexing with post hoc rRNA depletion. We show that M3-seq can profile bacterial cells from different species under a range of conditions in single experiments. We then apply M3-seq to hundreds of thousands of cells, revealing rare populations, yielding insights into bet-hedging associated with stress responses, and characterizing phage infection.
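Combinatorial cell indexing, mentioned above, identifies each cell by the unique combination of barcodes it receives across labeling rounds. A toy sketch of the decoding side; the barcode sequences and two-round setup are invented for illustration:

```python
from itertools import product

round1 = ["AAC", "AGT"]            # hypothetical round-1 barcodes
round2 = ["TTG", "TCA"]            # hypothetical round-2 barcodes

# Each cell receives one barcode per round; the concatenated pair is
# (with high probability) unique to that cell.
cell_ids = {b1 + b2: i for i, (b1, b2) in enumerate(product(round1, round2))}

def assign_read(read_barcode):
    """Map a sequenced barcode combination back to its cell; None if unrecognized."""
    return cell_ids.get(read_barcode)
```

With r barcodes per round and n rounds, r**n combinations are addressable, which is what lets a small barcode set index hundreds of thousands of cells.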
  6. Although the 3D structures of active and inactive cannabinoid receptor type 2 (CB2) are available, neither an X-ray crystal nor a cryo-EM structure of a CB2-orthosteric ligand-modulator complex has been resolved, hindering the discovery and development of CB2 allosteric modulators (AMs). In the present work, we focused on investigating the potential allosteric binding site(s) of CB2. We applied different algorithms and tools to predict the potential allosteric binding sites of CB2 with the existing agonists. Seven potential allosteric sites can be observed for either the CB2-CP55940 or the CB2-WIN 55,212-2 complex, among which sites B, C, G and K are supported by the reported 3D structures of Class A GPCRs coupled with AMs. Applying our novel algorithm toolset, MCCS, we docked three known AMs of CB2, including Ec2la (C-2), trans-β-caryophyllene (TBC) and cannabidiol (CBD), to each site for further comparison and quantified the potential binding residues in each allosteric binding site. We then selected the most promising binding pose of C-2 in five allosteric sites for molecular dynamics (MD) simulations. Based on the docking studies and MD simulations, we suggest that site H is the most promising allosteric binding site. We plan to conduct bio-assay validations in the future.
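Quantifying binding residues, as the abstract describes, typically reduces to a distance-cutoff contact analysis between ligand and receptor atoms. A generic sketch of that step only (not MCCS itself; the coordinates and the 4.5 Å cutoff are illustrative):

```python
import numpy as np

def contact_residues(ligand_xyz, residue_xyz, cutoff=4.5):
    """Indices of residues whose representative atom lies within
    `cutoff` angstroms of any ligand atom."""
    # pairwise distances: shape (n_residues, n_ligand_atoms)
    d = np.linalg.norm(residue_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    return np.where(d.min(axis=1) <= cutoff)[0]

ligand = np.array([[0.0, 0.0, 0.0]])                 # toy ligand atom
residues = np.array([[1.0, 0.0, 0.0],                # in contact
                     [10.0, 0.0, 0.0]])              # far away
contacts = contact_residues(ligand, residues)
```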
  7. Eye tracking has become an essential human-machine interaction modality for providing immersive experiences in numerous virtual and augmented reality (VR/AR) applications, which demand high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by (1) a large form factor, due largely to the bulky lens-based cameras they adopt; (2) the high communication cost between the camera and the backend processor; and (3) potentially compromised visual privacy, restricting their wider adoption. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework dubbed EyeCoD to enable eye tracking systems with a much-reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to satisfy the small-form-factor need of mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design that reduces the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then estimates gaze directions from the ROI alone, greatly reducing redundant computation and data movement. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities for depth-wise layers, (3) uses input feature-wise partitioning to save activation memory, and (4) develops a sequential-write-parallel-read input buffer to alleviate the bandwidth requirement on the activation global buffer.
On-silicon measurement and extensive experiments validate that our EyeCoD consistently reduces both the communication and computation costs, leading to an overall system speedup of 10.95×, 3.21×, and 12.85× over general computing platforms including CPUs and GPUs, and a prior-art eye tracking processor called CIS-GEP, respectively, while maintaining the tracking accuracy. Codes are available at https://github.com/RICE-EIC/EyeCoD. 
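The predict-then-focus pipeline above can be sketched as: run a cheap segmentation to find the ROI, then run gaze estimation only on the crop. The stub predictor, the 100×100 frame, and the function names below are hypothetical, not EyeCoD's actual models:

```python
import numpy as np

def predict_then_focus(frame, segment, estimate_gaze):
    """Crop to the predicted ROI, then estimate gaze on the crop only.

    Returns the gaze estimate and the fraction of pixels processed,
    which is the source of the computation/data-movement savings.
    """
    y0, y1, x0, x1 = segment(frame)        # cheap ROI prediction
    roi = frame[y0:y1, x0:x1]
    return estimate_gaze(roi), roi.size / frame.size

frame = np.arange(10000, dtype=float).reshape(100, 100)   # toy eye image
segment = lambda f: (20, 60, 30, 70)                      # stub ROI predictor
gaze, frac = predict_then_focus(frame, segment, lambda r: r.mean())
```

Here the gaze model sees only 16% of the pixels; the accelerator-level optimizations in the abstract then exploit this reduced workload.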